
    Photogalvanic Effect in Different 2D Materials

    Spintronics is based on the control of electron spin properties, and spin-polarized currents and pure spin currents are vital to it. Because the photogalvanic effect generates current purely by illumination, without an applied bias voltage, it is now used to realize spin-polarized or even pure spin currents. To keep reducing device sizes, it is particularly important to build spintronic devices at low-dimensional scales. Owing to their outstanding optical and electronic properties, two-dimensional materials have received extensive attention in recent years. In this review, we survey the photoresponse of different materials under different illuminations and the use of the photogalvanic effect to generate spin-polarized currents and even pure spin currents. The materials covered include two-dimensional transition metal dichalcogenides, zigzag silicon carbide nanoribbons, and armchair-edged graphene nanoribbons. This review provides possible directions for material selection for new spintronic devices.

    Explaining Dynamic Graph Neural Networks via Relevance Back-propagation

    Graph Neural Networks (GNNs) have shown remarkable effectiveness in capturing the abundant information in graph-structured data. However, the black-box nature of GNNs hinders users from understanding and trusting the models, leading to difficulties in their application. While recent years have witnessed a surge of studies on explaining GNNs, most focus on static graphs, leaving the explanation of dynamic GNNs nearly unexplored. Explaining dynamic GNNs is challenging because of their time-varying graph structures; directly applying models designed for static graphs is not feasible, as they ignore temporal dependencies among the snapshots. In this work, we propose DGExplainer to provide reliable explanations for dynamic GNNs. DGExplainer redistributes the output activation score of a dynamic GNN to the relevances of the neurons in the previous layer, iterating until the relevance scores of the input neurons are obtained. We conduct quantitative and qualitative experiments on real-world datasets to demonstrate the effectiveness of the proposed framework at identifying important nodes in link prediction and node regression for dynamic GNNs.
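    The layer-wise relevance redistribution that DGExplainer builds on can be illustrated for a single linear layer. Below is a minimal pure-Python sketch of an epsilon-rule redistribution; the function name and the epsilon stabilizer are illustrative, not the paper's exact formulation:

```python
def lrp_linear(x, w, relevance_out, eps=1e-9):
    """Redistribute a layer's output relevance to its input neurons in
    proportion to each input's contribution z_ij = x[i] * w[i][j]."""
    n_in, n_out = len(x), len(relevance_out)
    # pre-activations z_j = sum_i x_i * w_ij
    z = [sum(x[i] * w[i][j] for i in range(n_in)) for j in range(n_out)]
    relevance_in = [0.0] * n_in
    for i in range(n_in):
        for j in range(n_out):
            stab = eps if z[j] >= 0 else -eps  # avoid division by zero
            relevance_in[i] += x[i] * w[i][j] / (z[j] + stab) * relevance_out[j]
    return relevance_in

# relevance is conserved: inputs receive exactly what the outputs held
r_in = lrp_linear([1.0, 2.0], [[1.0, 0.5], [0.5, 1.0]], [0.6, 0.4])
```

    Iterating such a step backwards layer by layer, until the input neurons are reached, is the role the redistribution plays in DGExplainer.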

    Balanced Coarsening for Multilevel Hypergraph Partitioning via Wasserstein Discrepancy

    We propose a balanced coarsening scheme for multilevel hypergraph partitioning, along with an initial partitioning algorithm designed to improve the quality of k-way hypergraph partitioning. By assigning vertex weights through the LPT algorithm, we generate a prior hypergraph under a relaxed balance constraint. With the prior hypergraph, we define a Wasserstein discrepancy to coordinate the optimal transport of the coarsening process; the optimal transport matrix is solved with the Sinkhorn algorithm. Our coarsening scheme fully takes into account the minimization of the connectivity metric (the objective function). For the initial partitioning stage, we define a normalized cut function induced by the Fiedler vector, which we prove to be concave, and accordingly design a three-point algorithm to find the best cut under the balance constraint.
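    The Sinkhorn algorithm mentioned above alternately rescales the rows and columns of a Gibbs kernel until both prescribed marginals are matched. A minimal pure-Python sketch (illustrative, not the paper's implementation):

```python
import math

def sinkhorn(cost, r, c, reg=0.5, n_iters=500):
    """Entropy-regularized optimal transport: returns a plan whose row
    sums approximate r and whose column sums approximate c."""
    m, n = len(r), len(c)
    K = [[math.exp(-cost[i][j] / reg) for j in range(n)] for i in range(m)]
    u, v = [1.0] * m, [1.0] * n
    for _ in range(n_iters):
        # scale columns to match c, then rows to match r
        v = [c[j] / sum(K[i][j] * u[i] for i in range(m)) for j in range(n)]
        u = [r[i] / sum(K[i][j] * v[j] for j in range(n)) for i in range(m)]
    return [[u[i] * K[i][j] * v[j] for j in range(n)] for i in range(m)]

cost = [[0.0, 1.0], [1.0, 0.0]]
plan = sinkhorn(cost, r=[0.7, 0.3], c=[0.4, 0.6])
```

    Smaller `reg` gives a plan closer to the unregularized optimum at the cost of slower convergence; coordinating the coarsening with such a plan is the role the Wasserstein discrepancy plays in the scheme above.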

    AutoTransfer: AutoML with Knowledge Transfer -- An Application to Graph Neural Networks

    AutoML has demonstrated remarkable success in finding an effective neural architecture for a given machine learning task defined by a specific dataset and evaluation metric. However, most existing AutoML techniques consider each task independently and from scratch, requiring the exploration of many architectures and leading to high computational cost. Here we propose AutoTransfer, an AutoML solution that improves search efficiency by transferring prior architectural design knowledge to the novel task of interest. Our key innovations are a task-model bank that captures model performance over a diverse set of GNN architectures and tasks, and a computationally efficient task embedding that can accurately measure the similarity between tasks. Based on the task-model bank and the task embeddings, we estimate the design priors of desirable models for the novel task by aggregating a similarity-weighted sum of the top-K design distributions of tasks similar to the task of interest. The computed design priors can be used with any AutoML search algorithm. We evaluate AutoTransfer on six datasets in the graph machine learning domain. Experiments demonstrate that (i) our proposed task embedding can be computed efficiently, and tasks with similar embeddings have similar best-performing architectures; (ii) AutoTransfer significantly improves search efficiency with the transferred design priors, reducing the number of explored architectures by an order of magnitude. Finally, we release GNN-Bank-101, a large-scale dataset of detailed GNN training information for 120,000 task-model combinations, to facilitate and inspire future research.
    Comment: ICLR 2023.
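    The similarity-weighted aggregation of top-K design distributions described above can be sketched as follows. The task similarities and the design-choice distributions in this example are made up for illustration:

```python
def aggregate_design_prior(similarities, design_dists, k=2):
    """Weight the design distributions of the k most similar tasks by
    task similarity and normalize, yielding a prior for the new task."""
    top_k = sorted(range(len(similarities)),
                   key=lambda i: similarities[i], reverse=True)[:k]
    total = sum(similarities[i] for i in top_k)
    return {choice: sum(similarities[i] * design_dists[i][choice]
                        for i in top_k) / total
            for choice in design_dists[0]}

# hypothetical bank: per-task distributions over one design choice
bank = [{"gcn": 0.8, "gat": 0.2},   # most similar task
        {"gcn": 0.4, "gat": 0.6},
        {"gcn": 0.1, "gat": 0.9}]   # least similar task (excluded by top-K)
prior = aggregate_design_prior([0.9, 0.5, 0.1], bank, k=2)
```

    The resulting prior is itself a distribution over design choices, so any downstream AutoML search algorithm can sample initial candidates from it.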

    The News Delivery Channel Recommendation Based on Granular Neural Network

    With the continuous maturation and expansion of neural network technology, deep neural networks have been widely utilized as the fundamental building blocks of deep learning in a variety of applications, including speech recognition, machine translation, image processing, and recommendation systems, allowing many complex real-world problems to be solved. Traditional news recommendation systems mostly employ techniques based on collaborative filtering and deep learning, but the performance of these algorithms is constrained by data sparsity and limited scalability. In this paper, we propose a recommendation model that uses a granular neural network to recommend news to appropriate channels by analyzing the properties of the news. Specifically, the granular neural network at the core of the model is built on a specified base neural network; different types of news content are assigned different information granularities, and information at these granularities is exchanged between networks in various ways. When processing data, the model produces granular outputs, which are compared with interval values preset for each platform to quantify the effectiveness of the analysis. The results can help the media match suitable news in depth, maximizing public attention to the news and the utilization of media resources.

    Topological inverse band theory in waveguide quantum electrodynamics

    Topological phases play a crucial role in the fundamental physics of light-matter interaction and in emerging applications of quantum technologies. However, the topological band theory of waveguide QED systems is known to break down because the energy bands become disconnected. Here, we introduce the concept of the inverse energy band and explore analytically topological scattering in a waveguide coupled to an array of quantum emitters. We uncover a rich structure of topological phase transitions, symmetric scale-free localization, completely flat bands, and the corresponding dark Wannier states. Although bulk-edge correspondence is partially broken because of radiative decay, we prove analytically that the scale-free localized states are distributed in a single inverse energy band in the topological phase and in two inverse bands in the trivial phase. Surprisingly, the winding number of the scattering textures depends on both the topological phase of the inverse subradiant band and the parity of the cell number. Our work opens the field of topological inverse bands and brings a new perspective to topological phases in light-matter interactions.
    Comment: Accepted for publication in Phys. Rev. Lett.

    Optimization of Combined Casing Treatment Structure Applied in a Transonic Axial Compressor Based on Surrogate Model

    For modern high-load compressors, excellent stability enhancement by casing treatment (CT) is desirable; however, effective CT design is very time consuming. In this study, a new combined CT structure, composed of axial skewed slots and end-wall injection, was proposed for installation in transonic axial compressors to improve overall performance. Considering the high computational cost of CFD simulation of the flow field in a transonic compressor, a Gaussian Process Regression (GPR) surrogate model, combined with Latin hypercube sampling, was utilized to predict compressor performance. For the optimization process, a multi-objective evolutionary algorithm (NSGA-II) was adopted to obtain the Pareto-optimal front. The main geometric parameters of the slot and the mass-flow rate of injection were selected as design parameters, with peak efficiency and pressure ratio as the two objectives. The results indicated that the surrogate model captures the key features of the targets of interest and accelerates the optimization process. The optimal scheme of the combined CT was found to increase the stall margin (SM) by 19.5% with only a small efficiency penalty, outperforming the reference combined casing treatment (CCT) scheme. Moreover, entropy-generation analysis showed that the superior effect of the optimized scheme (OPT) can be attributed to improved exchange flow in the slots and decreased loss in the whole passage.

    A Multi-Transformation Evolutionary Framework for Influence Maximization in Social Networks

    Influence maximization is a crucial problem in mining the deep information of social networks; it aims to select a seed set from the network that maximizes the number of influenced nodes. To evaluate the influence spread of a seed set efficiently, existing studies have proposed transformations with lower computational cost to replace the expensive Monte Carlo simulation process. These alternative transformations, based on prior knowledge of the network, induce different search behaviors from different perspectives, and it is difficult for users to determine a suitable transformation a priori. This article proposes a multi-transformation evolutionary framework for influence maximization (MTEFIM), with convergence guarantees, to exploit the potential similarities and unique advantages of the alternative transformations and to spare users from manually determining the most suitable one. In MTEFIM, multiple transformations are optimized simultaneously as multiple tasks, and each transformation is assigned an evolutionary solver. The three major components of MTEFIM are: 1) estimating the potential relationship across transformations based on the degree of overlap across individuals of different populations, 2) transferring individuals across populations adaptively according to the inter-transformation relationship, and 3) selecting the final output seed set, which incorporates knowledge from all the transformations. The effectiveness of MTEFIM is validated on both benchmark and real-world social networks. The experimental results show that MTEFIM can efficiently utilize the potentially transferable knowledge across multiple transformations to achieve highly competitive performance compared with several popular IM-specific methods. The implementation of MTEFIM can be accessed at https://github.com/xiaofangxd/MTEFIM.
    Comment: This work has been submitted to the IEEE Computational Intelligence Magazine for publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
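    Component 1) above, estimating inter-transformation relationships from population overlap, can be sketched as a mean Jaccard similarity between the seed sets evolved under two transformations. This is an illustrative measure; the paper's exact overlap definition may differ:

```python
def population_overlap(pop_a, pop_b):
    """Mean Jaccard similarity between paired seed sets of two
    populations evolved under different transformations."""
    def jaccard(s, t):
        return len(s & t) / len(s | t) if (s | t) else 1.0
    sims = [jaccard(set(a), set(b)) for a, b in zip(pop_a, pop_b)]
    return sum(sims) / len(sims)

# two populations of candidate seed sets (node ids are illustrative)
overlap = population_overlap([{1, 2, 3}, {4, 5}], [{2, 3, 6}, {4, 5}])
```

    A high overlap suggests the two transformations are exploring similar regions of the search space, which would justify transferring individuals between their populations (component 2).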

    RepBNN: towards a precise Binary Neural Network with Enhanced Feature Map via Repeating

    Binary neural networks (BNNs) are an extreme quantization of convolutional neural networks (CNNs), with all features and weights mapped to just 1 bit. Although a BNN greatly reduces memory and computation demands, making CNNs applicable on edge or mobile devices, it suffers a drop in network performance due to the reduced representation capability after binarization. In this paper, we propose a new replaceable and easy-to-use convolution module, RepConv, which enhances feature maps by replicating the input or output along the channel dimension by β times, without extra cost in the number of parameters or convolutional computation. We also define a set of RepTran rules for using RepConv throughout BNN modules such as the binary convolution, the fully connected layer, and batch normalization. Experiments demonstrate that after the RepTran transformation, a set of highly cited BNNs achieve consistently better performance than their original versions. For example, the Top-1 accuracy of Rep-ReCU-ResNet-20, i.e., a RepBconv-enhanced ReCU-ResNet-20, reaches 88.97% on CIFAR-10, 1.47% higher than that of the original network. And Rep-AdamBNN-ReActNet-A achieves 71.342% Top-1 accuracy on ImageNet, a new state-of-the-art result for BNNs. Code and models are available at: https://github.com/imfinethanks/Rep_AdamBNN.
    Comment: This paper has absolutely nothing to do with RepVGG; "rep" means repeating.
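    The core RepConv idea, repeating channels so the binary convolution sees a wider feature map at no parameter cost, can be sketched as below. This is a simplified illustration with plain lists; the actual module also covers output replication and the RepTran placement rules:

```python
def binarize(x):
    """1-bit quantization used in BNNs: the sign of the activation."""
    return 1.0 if x >= 0 else -1.0

def replicate_input(feature_map, beta):
    """Binarize a feature map (a list of C channels, each an H x W grid)
    and repeat the channel stack beta times, yielding beta * C channels
    without adding parameters or multiply operations."""
    binary = [[[binarize(v) for v in row] for row in ch] for ch in feature_map]
    return binary * beta

fmap = [[[0.3, -0.7], [1.2, -0.1]]]        # one 2x2 channel
enhanced = replicate_input(fmap, beta=4)   # four identical binary channels
```

    Because the replicated channels are binary, the extra width costs only cheap bitwise operations downstream, which is why the authors report the enhancement as essentially free.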